Current Issue: July-September | Volume: 2024 | Issue Number: 3 | Articles: 5
Current networks suffer from poor programmability, maintainability, and manageability due to network ossification. This challenge led to the concept of software-defined networking (SDN), which decouples the control plane from the infrastructure (data) plane. The innovation, however, introduced the controller placement problem: how to effectively place controllers within a network topology so that the control plane can manage the data-plane devices. This study was designed to empirically evaluate and compare the functionalities of two controller placement algorithms, the Pareto optimal combination (POCO) and the multi-objective combination (MOCO). The methodology adopted was explorative and comparative investigation. The study evaluated the performance of the POCO and MOCO algorithms in relation to calibrated positions of the controller within a software-defined network; the network environment and measurement metrics were held constant for both models during the evaluation, and the strengths and weaknesses of each model were justified. For the GoodNet network, the latencies of the two algorithms are 3100 ms for POCO and 2500 ms for MOCO; Switch to Controller Average Case latency is 2598 ms for POCO and 2769 ms for MOCO, and Worst Case Switch to Controller latency is 2776 ms for POCO and 2987 ms for MOCO. For the Savvis network, the latencies compare as follows: 2912 ms (POCO) and 2784 ms (MOCO) in Switch to Controller Average Case latency, 3129 ms (POCO) and 3017 ms (MOCO) in Worst Case Switch to Controller latency, 2789 ms (POCO) and 2693 ms (MOCO) in Average Case Controller to Controller latency, and 2873 ms (POCO) and 2756 ms (MOCO) in Worst Case Controller to Controller latency. For the AARNet network, the latencies compare as follows: 2473 ms (POCO) and 2129 ms (MOCO) in Switch to Controller Average Case latency, 2198 ms (POCO) and 2268 ms (MOCO) in Worst Case Switch to Controller latency, 2598 ms (POCO) and 2471 ms (MOCO) in Average Case Controller to Controller latency, and 2689 ms (POCO) and 2814 ms (MOCO) in Worst Case Controller to Controller latency. The Average Case and Worst Case latencies for Switch to Controller and Controller to Controller are minimal and favourable to the POCO model as against the MOCO model when evaluated on the GoodNet, Savvis, and AARNet networks. This indicates that the POCO model has a speed advantage over the MOCO model, while the MOCO model appears to be more resilient than the POCO model....
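The abstract reports average-case and worst-case switch-to-controller and controller-to-controller latencies for candidate controller placements. As a rough illustration only (the paper's own tooling is not given here), the sketch below shows how such metrics could be computed for a single candidate placement on a weighted topology graph; the networkx dependency, the example topology, and the placement_latencies helper are all assumptions for the example.

```python
# Minimal sketch: evaluating one candidate controller placement on a topology.
# The graph, link latencies, and placement below are hypothetical examples;
# POCO/MOCO themselves search over many such placements.
import itertools
import networkx as nx

def placement_latencies(graph, controllers):
    """Return average and worst-case switch-to-controller and
    controller-to-controller latencies for one set of controller nodes."""
    dist = dict(nx.all_pairs_dijkstra_path_length(graph, weight="latency"))
    # Each switch attaches to its nearest controller.
    s2c = [min(dist[s][c] for c in controllers) for s in graph.nodes]
    # Inter-controller latency over all controller pairs.
    c2c = [dist[a][b] for a, b in itertools.combinations(controllers, 2)]
    return (sum(s2c) / len(s2c), max(s2c),
            sum(c2c) / len(c2c) if c2c else 0.0, max(c2c, default=0.0))

# Hypothetical 5-node topology with per-link latency in milliseconds.
G = nx.Graph()
G.add_weighted_edges_from(
    [(0, 1, 10), (1, 2, 15), (2, 3, 20), (3, 4, 10), (4, 0, 25)],
    weight="latency",
)
print(placement_latencies(G, controllers={1, 3}))
```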
Medium frequency radio holds significance in modern society, as it supports broadcasting and individual communications in the public, government, and military sectors. Enhancing the availability and quality of these communications is only possible by improving the understanding of medium frequency propagation. While traditional methods of radio wave propagation research can have a high material demand and cost, software defined radio presents a versatile and low-cost platform for medium frequency signal reception and data acquisition. This paper details a research effort that uses software defined radio to help characterize medium frequency signal strength in relation to ionospheric and solar weather propagation determinants. Signal strength data from seven medium frequency stations with distinct transmission locations and varying transmission powers were retrieved in 24-hour segments via a receiving loop antenna, an Airspy HF+ Discovery software defined radio, and the SDR# (SDRSharp) software interface. The retrieved data sets were visualized and analyzed in MATLAB to identify signal strength trends, which were subsequently compared to historical ionospheric and space weather indices in pursuit of a quantifiable correlation between such indices and medium frequency signal strength. The results of the investigation demonstrate that software defined radio, when used in conjunction with a receiving antenna and a data analysis program, provides a versatile mechanism for cost-efficient propagation research....
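The study retrieves signal strength in 24-hour segments and compares the resulting trends against historical space weather indices using MATLAB. As a hedged illustration of that comparison step (not the authors' code), an analogous Python sketch might look like the following; the file names, column names, and choice of the Kp index are assumptions for the example.

```python
# Illustrative sketch only: correlate one station's 24-hour signal strength
# record with a geomagnetic index. File and column names are assumptions.
import numpy as np
import pandas as pd

# Hourly received signal strength (dB) for one station over 24 hours,
# and historical Kp index values for the same hours.
signal = pd.read_csv("station_signal_24h.csv")   # columns: hour, strength_db
kp = pd.read_csv("kp_index_24h.csv")             # columns: hour, kp

merged = signal.merge(kp, on="hour")

# Smooth short-term fading before looking for a propagation trend.
merged["strength_smooth"] = merged["strength_db"].rolling(3, center=True).mean()

# Quantify the relationship between signal strength and the geomagnetic index.
r = np.corrcoef(merged["strength_db"], merged["kp"])[0, 1]
print(f"Pearson correlation between signal strength and Kp: {r:.2f}")
```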
In recent years, the notion of resilience has been developed and applied in many technical areas, becoming exceptionally pertinent to disaster risk science. During a disaster, accurate sensing information is key to efficient recovery efforts. In general, resilience aims to minimize the impact of disruptions to systems through fast recovery of critical functionality, but resilient design may require redundancy and can increase costs. In this article, we describe a method based on binary linear programming for sensor network design that balances efficiency with resilience. The application of the developed framework is demonstrated for the case of interior building surveillance using infrared sensors in both two- and three-dimensional spaces. The method provides optimal sensor placement, taking into account critical functionality and a desired level of resilience, and considering sensor type and availability. The problem formulation, resilience requirements, and application of the optimization algorithm are described in detail. Analysis of sensor locations with and without resilience requirements shows that a resilient configuration requires redundancy in the number of sensors and their intelligent placement. Both tasks are successfully solved by the described method, which can be applied to strengthen the resilience of sensor networks by design. The proposed methodology is suitable for large-scale optimization problems with many sensors and extensive coverage areas....
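As a minimal sketch of the binary linear programming idea described above (not the paper's formulation), the example below selects sensor locations so that every surveillance target is covered by at least k sensors; the PuLP solver, the toy visibility data, and the k = 2 resilience level are assumptions for illustration.

```python
# Minimal sketch of coverage-with-redundancy sensor placement as a binary
# linear program. Targets, candidate locations, and visibility are toy data.
import pulp

targets = ["t1", "t2", "t3"]           # points that must be surveilled
candidates = ["s1", "s2", "s3", "s4"]  # candidate sensor locations
# covers[s] = set of targets a sensor at s would see (hypothetical).
covers = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"},
          "s3": {"t1", "t3"}, "s4": {"t2"}}
k = 2  # resilience: every target must be seen by at least k sensors

prob = pulp.LpProblem("resilient_sensor_placement", pulp.LpMinimize)
x = {s: pulp.LpVariable(f"x_{s}", cat="Binary") for s in candidates}

# Objective: minimize the number of sensors installed.
prob += pulp.lpSum(x.values())

# Coverage constraints: each target covered by at least k chosen sensors.
for t in targets:
    prob += pulp.lpSum(x[s] for s in candidates if t in covers[s]) >= k

prob.solve()
chosen = [s for s in candidates if x[s].value() == 1]
print("Selected sensor locations:", chosen)
```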
Quantum neural networks are expected to be a promising application of near-term quantum computing, but they face challenges such as vanishing gradients during optimization and limited expressibility due to the limited number of qubits and shallow circuits. To mitigate these challenges, an approach using distributed quantum neural networks has been proposed, which makes a prediction by approximating the outputs of a large circuit with multiple small circuits. However, approximating a large circuit requires an exponential number of small-circuit evaluations. Here, we instead propose to distribute partitioned features over multiple small quantum neural networks and use the ensemble of their expectation values to generate predictions. To verify our distributed approach, we demonstrate ten-class classification of the Semeion and MNIST handwritten digit datasets. The results on the Semeion dataset imply that while our distributed approach may outperform a single quantum neural network in classification performance, excessive partitioning reduces performance. Nevertheless, for the MNIST dataset, we achieved ten-class classification with accuracy exceeding 96%. Our proposed method not only achieved highly accurate predictions for a large dataset but also reduced the hardware requirements for each quantum neural network compared to a single large quantum neural network. Our results highlight distributed quantum neural networks as a promising direction for practical quantum machine learning algorithms compatible with near-term quantum devices. We hope that our approach is useful for exploring quantum machine learning applications....
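To make the feature-partitioning idea concrete, the sketch below splits a 16-dimensional input across four small variational circuits and concatenates their expectation values into an ensemble feature vector; PennyLane, the circuit templates, and all sizes are assumptions for illustration, not the paper's implementation.

```python
# Conceptual sketch of feature partitioning + expectation-value ensembling.
# PennyLane and all sizes here are assumptions; inputs are random stand-ins.
import numpy as np
import pennylane as qml

n_parts, n_qubits, n_layers = 4, 4, 2   # 16 input features split into 4 chunks
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def small_qnn(features, weights):
    # One small circuit: angle-encode its feature chunk, then a variational block.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = [np.random.uniform(0, np.pi, shape) for _ in range(n_parts)]

x = np.random.uniform(0, np.pi, n_parts * n_qubits)  # stand-in for one image
chunks = np.split(x, n_parts)

# Ensemble: concatenate each small circuit's expectation values; a classical
# readout (not shown) would map this vector to the ten class scores.
ensemble_features = np.concatenate(
    [np.asarray(small_qnn(c, w)) for c, w in zip(chunks, weights)]
)
print(ensemble_features.shape)  # (16,)
```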
As network technology advances and the number of connected devices grows, data storage has become a significant challenge due to the explosive growth of information and the threat of data leaks. In traditional medical institutions, most medical data is stored centrally in the institution's data center using cloud computing technology. This centralized storage method carries many security risks: once the central server is attacked, medical data can be lost and patients' private data leaked. Electronic medical records (EMRs) are the most critical data in the current medical field, and in the traditional centralized healthcare service system (HSS), human factors lead to data leakage and tampering with electronic medical records. Because each hospital's system is built independently, the centralized healthcare service system also suffers from a data silo problem, making it difficult to share medical data securely between institutions. Moreover, as the number of users in the system increases, the volume of electronic medical record data grows, and the decryption overhead increases accordingly. Therefore, this paper proposes a blockchain-based access control scheme with multiparty authorization to ensure the security of electronic medical records. The scheme uses the SM encryption algorithm to encrypt the medical data in the system and adds the patient's signature to ensure the confidentiality and security of the data; the encrypted electronic medical records are stored in the InterPlanetary File System (IPFS) to realize distributed storage of EMRs. In addition, role-based multiauthorization access control is implemented through smart contracts to ensure the security of EMRs. We analyze the security of the proposed solution and compare its performance with existing schemes based on other cryptographic algorithms. The experimental results show that the proposed solution significantly improves the secure sharing of EMRs and the system's performance....
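As a purely conceptual stand-in for the access control flow described above (not the paper's smart contract, and without real SM2/SM4 cryptography or IPFS), the sketch below models role-based multiparty authorization: an encrypted EMR is registered under a content hash standing in for an IPFS CID, and the storage pointer is released only after the required roles have approved.

```python
# Conceptual stand-in for the scheme's smart-contract logic. The classes,
# roles, and hash-as-CID below are illustrative only; a real deployment would
# use SM2/SM4 encryption, patient signatures, IPFS, and an on-chain contract.
import hashlib

class EMRRegistry:
    def __init__(self):
        self.records = {}   # record_id -> {"cid": ..., "approvals": set()}
        self.roles = {}     # user -> role, e.g. "patient", "doctor"

    def store(self, record_id, encrypted_emr: bytes):
        # The ciphertext itself would live in IPFS; we keep only its
        # content address (here, a SHA-256 digest standing in for a CID).
        cid = hashlib.sha256(encrypted_emr).hexdigest()
        self.records[record_id] = {"cid": cid, "approvals": set()}
        return cid

    def approve(self, record_id, user):
        # Each authorizing party registers its consent; on chain this
        # would be a signed transaction from that party.
        self.records[record_id]["approvals"].add(self.roles[user])

    def access(self, record_id, required=frozenset({"patient", "doctor"})):
        # Release the storage pointer only when every required role approved.
        rec = self.records[record_id]
        if required <= rec["approvals"]:
            return rec["cid"]
        raise PermissionError("multiparty authorization incomplete")

# Example flow with hypothetical users.
reg = EMRRegistry()
reg.roles.update({"alice": "patient", "dr_bob": "doctor"})
reg.store("emr-001", b"<SM4-encrypted EMR bytes>")
reg.approve("emr-001", "alice")
reg.approve("emr-001", "dr_bob")
print(reg.access("emr-001"))
```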